The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). Finally, 48% of the respondents applied postprocessing steps.
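As an illustration of the patch-based training strategy that most respondents used for oversized samples, here is a minimal sketch; the function and parameter names are ours and are not taken from any surveyed solution:

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a volume that is too large
    to be processed at once."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        window = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[window])
    return np.stack(patches)

# Example: a 512^3 volume is reduced to a mini-batch of 64^3 training patches.
volume = np.zeros((512, 512, 512), dtype=np.float32)
print(sample_patches(volume).shape)  # (8, 64, 64, 64)
```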
Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while being constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make the novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policy as a Gaussian mixture and overcome the induced optimization difficulties by leveraging a lower bound of LogSumExp and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark.
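The two approximations that make the operator closed-form can be sketched as follows; the notation is ours and simplified (e.g., we write the expansion point as the behavior action a_beta):

```latex
% The behavior constraint keeps the optimized action a close to the behavior
% action a_beta, which justifies a first-order Taylor expansion of the critic:
\[
  Q(s, a) \;\approx\; Q(s, a_\beta)
  + \nabla_a Q(s, a)\big|_{a = a_\beta} \cdot (a - a_\beta),
\]
% turning the policy objective into a linear function of a. For a Gaussian-mixture
% behavior policy, the log-density is handled with Jensen's inequality (log is
% concave, the mixture weights w_i are convex):
\[
  \log \sum_i w_i \, \mathcal{N}(a; \mu_i, \Sigma_i)
  \;\ge\; \sum_i w_i \log \mathcal{N}(a; \mu_i, \Sigma_i),
\]
% together with the standard LogSumExp lower bound
% \log \sum_i \exp(x_i) \ge \max_i x_i.
```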
Adaptive mesh refinement (AMR) is necessary for efficient finite element simulations of complex physical phenomena, as it allocates a limited computational budget based on the need for higher or lower resolution, which varies over space and time. We present a novel formulation of AMR as a fully-cooperative Markov game, in which each element is an independent agent that makes refinement and de-refinement choices based on local information. We design a novel deep multi-agent reinforcement learning (MARL) algorithm called Value Decomposition Graph Network (VDGN), which solves the two core challenges that AMR poses for MARL: posthumous credit assignment due to agent creation and deletion, and unstructured observations due to the diversity of mesh geometries. For the first time, we show that MARL enables anticipatory refinement of regions that will encounter complex features at future times, thereby unlocking entirely new regions of the error-cost objective landscape that are inaccessible by traditional methods based on local error estimators. Comprehensive experiments show that VDGN policies significantly outperform error-threshold-based policies in global error and cost metrics. We show that learned policies generalize to test problems with physical features, mesh geometries, and longer simulation times that were not seen in training. We also extend VDGN with multi-objective optimization capabilities to find the Pareto front of the tradeoff between cost and error.
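For the value-decomposition idea at the core of VDGN, a minimal VDN-style sketch (team value as a sum of per-agent utilities) may help; the class and variable names are hypothetical, and the actual VDGN additionally runs a graph network over the mesh elements:

```python
import torch
import torch.nn as nn

class SummedUtilityNetwork(nn.Module):
    """VDN-style value decomposition: the team Q-value is the sum of per-agent
    utilities. Each mesh element acts as an agent; a shared network maps its
    local observation to utilities for refine / de-refine / no-op actions."""

    def __init__(self, obs_dim, n_actions=3, hidden=64):
        super().__init__()
        self.utility = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs, actions):
        # obs: (n_agents, obs_dim) float; actions: (n_agents,) long
        q_all = self.utility(obs)                        # (n_agents, n_actions)
        q_taken = q_all.gather(1, actions.unsqueeze(1))  # utility of taken action
        return q_taken.sum()                             # team value Q_tot

net = SummedUtilityNetwork(obs_dim=8)
q_tot = net(torch.randn(100, 8), torch.randint(0, 3, (100,)))
```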
With the rapid development of self-supervised learning (e.g., contrastive learning), the importance of training more generalizable AI models with large-scale images (even without annotations) has been widely recognized in medical image analysis. However, collecting large-scale unannotated data for a specific task can be challenging for a single laboratory. Existing online resources, such as digital books, publications, and search engines, provide a new source of large-scale images. However, published images in healthcare (e.g., radiology and pathology) consist largely of compound figures with subplots. To extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework, with a new loss function and hard case simulation, that does not use the traditionally required detection bounding box annotations. Our technical contribution is four-fold: (1) we introduce a simulation-based training framework that minimizes the need for resource-extensive bounding box annotations; (2) we propose a new side loss that is optimized for compound figure separation; (3) we propose an intra-class image augmentation method to simulate hard cases; and (4) to the best of our knowledge, this is the first study to evaluate the efficacy of leveraging self-supervised learning with compound figure separation. The results show that the proposed SimCFS achieves state-of-the-art performance on the ImageCLEF 2016 compound figure separation database. Pretraining a self-supervised learning model on the large-scale mined figures with a contrastive learning algorithm improved the accuracy of downstream image classification tasks. The source code of SimCFS is publicly available at https://github.com/hrlblab/imageseperation.
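A minimal sketch of the simulation-based training idea, synthesizing compound figures from individual images so that bounding box labels come for free; the names and layout rules are illustrative, not SimCFS's actual pipeline:

```python
import numpy as np

def simulate_compound_figure(images, rows=2, cols=2, pad=5):
    """Paste individual images (H x W x 3, uint8) into a grid to synthesize a
    compound figure. Returns the canvas plus each subfigure's bounding box,
    so box annotations are generated rather than manually labeled."""
    h = max(im.shape[0] for im in images)
    w = max(im.shape[1] for im in images)
    canvas = np.full((rows * (h + pad), cols * (w + pad), 3), 255, dtype=np.uint8)
    boxes = []
    for i, im in enumerate(images[: rows * cols]):
        r, c = divmod(i, cols)
        y, x = r * (h + pad), c * (w + pad)
        canvas[y : y + im.shape[0], x : x + im.shape[1]] = im
        boxes.append((x, y, im.shape[1], im.shape[0]))  # (x, y, width, height)
    return canvas, boxes
```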
Event detection (ED), which aims to detect events from texts and categorize them, is crucial to understanding actual happenings in real life. However, mainstream event detection models require high-quality expert annotations of triggers, which are often costly and thus prevent the application of ED to new domains. Therefore, in this paper we focus on low-resource ED without triggers and aim to tackle the following formidable challenges: multi-label classification, insufficient clues, and imbalanced event distribution. We propose a novel trigger-free ED method via a Derangement Reading Comprehension (DRC) framework. More specifically, we treat the input text as the context and concatenate it with the tokens of all event types, which are regarded as the answers, with the default question omitted. We can thus leverage the self-attention in pre-trained language models to absorb the semantic relations between the input text and the event types. Moreover, we design a simple yet effective event derangement module (EDM) to prevent major events from being over-learned, yielding a more balanced training process. Experimental results show that our proposed trigger-free ED model is remarkably competitive with mainstream trigger-based models, demonstrating its strong performance on low-resource event detection.
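The input construction can be sketched as follows, assuming a BERT-style encoder; the event-type inventory and function names are hypothetical:

```python
from transformers import AutoTokenizer

# Hypothetical event-type inventory; each type becomes a candidate "answer".
EVENT_TYPES = ["Attack", "Transport", "Meet", "Elect"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def build_input(text):
    """Concatenate the context with all event-type tokens; the default question
    is omitted. Self-attention in the encoder can then relate every event type
    to the input text in a single pass."""
    answers = " ".join(EVENT_TYPES)
    return tokenizer(text, answers, return_tensors="pt", truncation=True)

enc = build_input("Troops were deployed to the border on Monday.")
print(enc["input_ids"].shape)  # [CLS] context [SEP] event types [SEP]
```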
Although machine learning models have rapidly advanced the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given these models' vulnerability to spurious correlations. While current domain generalization methods usually focus on enforcing certain invariance properties across different domains via new loss function designs, we propose a balanced mini-batch sampling strategy to reduce the domain-specific spurious correlations in the observed training distribution. More specifically, we propose a two-step method that 1) identifies the source of the spurious correlations, and 2) builds balanced mini-batches free of spurious correlations by matching on the identified source. We provide an identifiability guarantee for the source of spuriousness and show that our proposed approach samples from a balanced, spurious-free distribution over all training environments. Experiments on three computer vision datasets with documented spurious correlations demonstrate empirically that our balanced mini-batch sampling strategy improves the performance of four established domain generalization model baselines compared to the random mini-batch sampling strategy.
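A minimal sketch of step 2, constructing mini-batches balanced across groups defined by the identified source of spuriousness; the data layout and names are illustrative:

```python
import random
from collections import defaultdict

def balanced_batches(samples, batch_size):
    """Yield mini-batches balanced across (label, spurious-source) groups.
    `samples` are (x, y, s) triples where s is the source of spuriousness
    identified in step 1; each group contributes equally to every batch."""
    groups = defaultdict(list)
    for x, y, s in samples:
        groups[(y, s)].append((x, y, s))
    keys = list(groups)
    per_group = max(1, batch_size // len(keys))
    while True:
        batch = []
        for k in keys:
            batch.extend(random.choices(groups[k], k=per_group))
        yield batch[:batch_size]
```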
Recently, transformer architectures have demonstrated their significance in both natural language processing (NLP) and computer vision (CV) tasks. While other network models are known to be vulnerable to backdoor attacks, which embed a trigger in the model and control the model's behavior when the trigger is presented, little is known about whether such attacks remain effective on transformer models and, if so, whether they can be carried out in a more cost-efficient manner. In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks that leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset. We conducted extensive experiments based on three benchmark transformers, i.e., ViT, DeiT, and Swin Transformer, on two mainstream image classification tasks, i.e., CIFAR10 and ImageNet. The evaluation results demonstrate that, while consuming fewer resources, our approach can embed backdoors with a high success rate and low impact on the performance of the victim transformers. Our code is available at https://anonymous.4open.science/r/dbia-825d.
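The generic dataset-poisoning step can be sketched as follows; this illustrative snippet shows only standard patch stamping, not DBIA's attention-guided trigger generation or its surrogate dataset construction:

```python
import numpy as np

def poison(images, labels, trigger, target_class, alpha=1.0):
    """Stamp a small trigger patch into the corner of each image (float arrays
    in [0, 1]) and relabel to the attacker's target class. This is the generic
    poisoning step common to backdoor attacks, shown for illustration."""
    th, tw = trigger.shape[:2]
    poisoned = images.copy()
    poisoned[:, -th:, -tw:] = (1 - alpha) * poisoned[:, -th:, -tw:] + alpha * trigger
    return poisoned, np.full_like(labels, target_class)
```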
Although model-based reinforcement learning (RL) methods are considered to be more sample-efficient, existing algorithms usually rely on sophisticated planning algorithms tightly coupled with the model-learning process. Hence, the learned models may lack the ability to be reused with more specialized planners. In this paper, we address this issue and provide approaches to learn an RL model efficiently without the guidance of a reward signal. In particular, we take a plug-in solver approach, where we focus on learning a model in the exploration phase and demand that any planning algorithm run on the learned model can give a near-optimal policy. Specifically, we focus on the linear mixture MDP setting, where the probability transition matrix is an (unknown) convex combination of a set of existing models. We show that by establishing a novel exploration algorithm, the plug-in approach learns a model with $\tilde{O}(d^2 H^3 / \epsilon^2)$ interactions with the environment, such that any $\epsilon$-optimal planner on the learned model gives an $O(\epsilon)$-optimal policy on the original model. This sample complexity matches the lower bound for non-plug-in approaches and is statistically optimal. We achieve this result by carefully bounding the maximum total variance, using Bernstein's inequality and properties specific to linear mixture MDPs.
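In the linear mixture MDP setting, the claim can be stated compactly; the notation below is a standard rendering, not necessarily the paper's:

```latex
% Linear mixture MDP: the unknown transition kernel is a convex combination of
% d known basis models, with mixture weights theta on the simplex:
\[
  P(s' \mid s, a) \;=\; \sum_{i=1}^{d} \theta_i \, P_i(s' \mid s, a),
  \qquad \theta_i \ge 0, \quad \sum_{i=1}^{d} \theta_i = 1 .
\]
% Guarantee (informal): after \tilde{O}(d^2 H^3 / \epsilon^2) interactions with
% the environment, any \epsilon-optimal planner run on the learned model returns
% a policy that is O(\epsilon)-optimal in the true model.
```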
A crucial issue of current text generation models is that they often uncontrollably generate text that is factually inconsistent with their inputs. Limited by the lack of annotated data, existing works on evaluating factual consistency directly transfer the reasoning ability of models trained on other data-rich upstream tasks, such as question answering (QA) and natural language inference (NLI), without any further adaptation. As a result, they perform poorly on real generated text and are biased heavily by their single-source upstream tasks. To alleviate this problem, we propose a weakly supervised framework that aggregates multiple resources to train a precise and efficient factual metric, namely WeCheck. WeCheck first utilizes a generative model to accurately label a real generated sample by aggregating its weak labels, which are inferred from multiple resources. Then, we train the target metric model with this weak supervision while taking the noise into consideration. Comprehensive experiments on a variety of tasks demonstrate the strong performance of WeCheck, which achieves a 3.4% absolute improvement over previous state-of-the-art methods on the TRUE benchmark on average.
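The weak-label aggregation step can be sketched as follows; a simple weighted average stands in for WeCheck's generative labeling model, and all names are illustrative:

```python
import numpy as np

def aggregate_weak_labels(weak_scores, weights=None, threshold=0.5):
    """Aggregate noisy consistency judgments from several upstream checkers
    (e.g., QA- and NLI-based metrics) into one soft label per sample, which
    then supervises the target metric model."""
    weak_scores = np.asarray(weak_scores, dtype=float)  # (n_samples, n_metrics)
    weights = np.ones(weak_scores.shape[1]) if weights is None else np.asarray(weights)
    soft = weak_scores @ (weights / weights.sum())      # (n_samples,) in [0, 1]
    hard = (soft >= threshold).astype(int)              # 1 = factually consistent
    return soft, hard
```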
Recent advances in operator learning theory have improved our knowledge about learning maps between infinite-dimensional spaces. However, for large-scale engineering problems such as concurrent multiscale simulation of mechanical properties, the training cost of current operator learning methods is very high. This article presents a thorough analysis of the mathematical underpinnings of the operator learning paradigm and proposes a kernel learning method that maps between function spaces. We first provide a survey of modern kernel and operator learning theory and discuss recent results and open problems. From there, the article presents an algorithm showing how piecewise constant functions on R can be analytically approximated for operator learning, which suggests that neural operators can succeed on clustered functions. Finally, a k-means clustered domain based on the mechanistic response is considered, and the Lippmann-Schwinger equation for micro-mechanical homogenization is solved. The article briefly discusses the mathematics of previous kernel learning methods and some preliminary results obtained with them. The proposed kernel operator learning method uses graph kernel networks to develop a mechanistic reduced-order method for multiscale homogenization.
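For reference, one common form of the Lippmann-Schwinger equation in micro-mechanical homogenization (the Moulinec-Suquet setting) is shown below; the notation is the standard one rather than necessarily the article's:

```latex
% With heterogeneous stiffness C(x), reference stiffness C^0, the reference
% medium's Green operator Gamma^0, and applied macroscopic strain E, the local
% strain field satisfies
\[
  \varepsilon(x) \;=\; E \;-\; \int_{\Omega} \Gamma^0(x - x')
  : \big( C(x') - C^0 \big) : \varepsilon(x') \, \mathrm{d}x' .
\]
% The operator-learning task is then to approximate the solution map
% C(\cdot) \mapsto \varepsilon(\cdot) between function spaces.
```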